
    Credit constraints and the propagation of the Great Depression in Germany

    We evaluate the role played by loan supply shocks in the decline of investment and industrial production during the Great Depression in Germany from 1927 to 1932. We identify loan supply shocks in the context of a time-varying parameter vector autoregression with stochastic volatility. Our results indicate that credit constraints were a significant driver of industrial production between 1927 and 1932, supporting the view that a structurally weak banking sector was an important contributor to the German Great Depression. We find further that loan supply shocks were an important driver of investment in the early phase of the depression, between 1927 and 1929, but not between 1930 and 1932. We suggest possible explanations for this puzzle and directions for future research.

    A Bayesian Alternative to Generalized Cross Entropy Solutions for Underdetermined Econometric Models

    This paper presents a Bayesian alternative to Generalized Maximum Entropy (GME) and Generalized Cross Entropy (GCE) methods for deriving solutions to econometric models represented by underdetermined systems of equations. For certain types of econometric model specifications, the Bayesian approach provides fully equivalent results to GME-GCE techniques. However, in its general form, the proposed Bayesian methodology allows a more direct and straightforwardly interpretable formulation of available prior information, and can significantly reduce the computational effort involved in finding solutions. The technique can be adapted to provide solutions in situations characterized by either informative or uninformative prior information.

    Keywords: Underdetermined Equation Systems, Maximum Entropy, Bayesian Priors, Structural Estimation, Calibration, Research Methods/Statistical Methods. JEL codes: C11, C13, C51.
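
As a rough illustration of the entropy device these methods build on (not the paper's Bayesian formulation), the sketch below solves a toy underdetermined system by the standard GME trick of re-expressing each unknown as a probability-weighted average over a support grid and maximizing the entropy of the weights subject to the model constraints. The toy system, support points, and solver settings are all hypothetical choices for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy problem: one equation, two unknowns (y = X @ beta with X 1x2),
# so infinitely many solutions; GME picks the maximum-entropy one.
X = np.array([[1.0, 1.0]])
y = np.array([1.0])
z = np.array([0.0, 0.5, 1.0])        # assumed common support for each coefficient
K, M = X.shape[1], len(z)

def neg_entropy(p):
    # minimizing sum p*log(p) == maximizing Shannon entropy of the weights
    p = p.reshape(K, M)
    return np.sum(p * np.log(p + 1e-12))

cons = [
    # each coefficient's support weights must sum to one
    *({'type': 'eq', 'fun': (lambda p, k=k: p.reshape(K, M)[k].sum() - 1.0)}
      for k in range(K)),
    # data constraint: X @ (p @ z) = y
    {'type': 'eq', 'fun': lambda p: X @ (p.reshape(K, M) @ z) - y},
]
p0 = np.full(K * M, 1.0 / M)         # start from uniform weights
res = minimize(neg_entropy, p0, constraints=cons,
               bounds=[(1e-9, 1.0)] * (K * M), method='SLSQP')
beta = res.x.reshape(K, M) @ z       # recovered coefficients
```

With symmetric support and a symmetric constraint, the maximum-entropy answer splits the mass evenly, so both coefficients come out near 0.5.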

    Conceptual mechanization studies for a horizon definition spacecraft structures and thermal subsystem

    Conceptual mechanization for horizon definition spacecraft structures and thermal subsystem - spin-stabilized, hexagonal cylinder for launch of two-stage Improved Delta/DSV-3N

    Analysing Magnetism Using Scanning SQUID Microscopy

    Scanning superconducting quantum interference device microscopy (SSM) is a scanning probe technique that images local magnetic flux, which allows for mapping of magnetic fields with high field and spatial accuracy. Many studies involving SSM have been published in recent decades, using SSM to make qualitative statements about magnetism. However, quantitative analysis using SSM has received less attention. In this work, we discuss several aspects of interpreting SSM images and methods to improve quantitative analysis. First, we analyse the spatial resolution and how it depends on several factors. Second, we discuss the analysis of SSM scans and the information obtained from the SSM data. Using simulations, we show how signals evolve as a function of changing scan height, SQUID loop size, magnetization strength and orientation. We also investigate 2-dimensional autocorrelation analysis to extract information about the size, shape and symmetry of magnetic features. Finally, we provide an outlook on possible future applications and improvements.

    Comment: 16 pages, 10 figures
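
The dependence of the signal on pickup-loop size that the abstract describes can be mimicked with a toy convolution model: the measured image is approximately the out-of-plane field averaged over the loop area, so a larger loop broadens and weakens the response to a point-like feature. The grid size, loop radii, and field map below are made-up illustrations, not the authors' simulation code.

```python
import numpy as np
from scipy.signal import fftconvolve

def loop_kernel(radius_px, size):
    """Binary disk approximating the pickup-loop area, normalized so the
    convolution returns the loop-averaged field."""
    yy, xx = np.mgrid[-(size // 2):size // 2, -(size // 2):size // 2]
    disk = (xx**2 + yy**2 <= radius_px**2).astype(float)
    return disk / disk.sum()

# point-like magnetic feature (e.g. a single vortex) on a 128x128 grid
bz = np.zeros((128, 128))
bz[64, 64] = 1.0

image_small = fftconvolve(bz, loop_kernel(3, 31), mode='same')
image_large = fftconvolve(bz, loop_kernel(10, 31), mode='same')
# larger loop -> broader, weaker peak: lower spatial resolution
```

The total signal is conserved in both images; only how it is spread over pixels changes, which is one way to see why loop size, like scan height, limits resolution without destroying quantitative flux information.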

    Computability and Adaptivity in CFD

    We give a brief introduction to research on adaptive computational methods for laminar compressible and incompressible flow, and then focus on computability and adaptivity for turbulent incompressible flow, where we present a framework for adaptive finite element methods with duality-based a posteriori error control for chosen output quantities of interest. We show in concrete examples that outputs such as mean values in time of drag and lift of a bluff body in a turbulent flow are computable to a tolerance of a few percent, using some hundred thousand mesh points for a simple geometry and some million mesh points for complex geometries.
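
As a loose, much-simplified analogue of output-driven adaptivity (one-dimensional quadrature rather than finite elements, and without the dual problem the paper uses to weight residuals), the sketch below refines a mesh only where a local a posteriori error indicator is largest, until the chosen output, an integral, meets a tolerance. The function, indicator, and tolerances are hypothetical.

```python
import math

def adaptive_integral(f, a, b, tol=1e-6):
    """Refine cells with the largest local indicator until the summed
    indicator (trapezoid-vs-midpoint disagreement) is below tol."""
    cells = [(a, b)]
    while True:
        mids = [(x0 + x1) / 2 for x0, x1 in cells]
        trap = [(f(x0) + f(x1)) / 2 * (x1 - x0) for x0, x1 in cells]
        midr = [f(m) * (x1 - x0) for (x0, x1), m in zip(cells, mids)]
        err = [abs(t, ) if False else abs(t - m) for t, m in zip(trap, midr)]
        if sum(err) < tol:
            # Simpson-like combination on the accepted, non-uniform mesh
            return sum((t + 2 * m) / 3 for t, m in zip(trap, midr))
        cut = max(err) / 2
        new = []
        for (x0, x1), e, m in zip(cells, err, mids):
            # bisect only the cells carrying the largest error contributions
            new += [(x0, m), (m, x1)] if e >= cut else [(x0, x1)]
        cells = new

val = adaptive_integral(math.sin, 0.0, math.pi)   # exact answer is 2
```

The point mirrored from the paper is that effort concentrates where the indicator says the output is most at risk, rather than refining uniformly.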

    Reconstructing phylogenetic level-1 networks from nondense binet and trinet sets

    Binets and trinets are phylogenetic networks with two and three leaves, respectively. Here we consider the problem of deciding if there exists a binary level-1 phylogenetic network displaying a given set T of binary binets or trinets over a taxon set X, and constructing such a network whenever it exists. We show that this is NP-hard for trinets but polynomial-time solvable for binets. Moreover, we show that the problem is still polynomial-time solvable for inputs consisting of binets and trinets as long as the cycles in the trinets have size three. Finally, we present an O(3^{|X|} poly(|X|)) time algorithm for general sets of binets and trinets. The latter two algorithms generalise to instances containing level-1 networks with arbitrarily many leaves, and thus provide some of the first supernetwork algorithms for computing networks from a set of rooted level-1 phylogenetic networks.

    Assembling large, complex environmental metagenomes

    The large volumes of sequencing data required to sample complex environments deeply pose new challenges to sequence analysis approaches. De novo metagenomic assembly effectively reduces the total amount of data to be analyzed but requires significant computational resources. We apply two pre-assembly filtering approaches, digital normalization and partitioning, to make large metagenome assemblies more computationally tractable. Using a human gut mock community dataset, we demonstrate that these methods result in assemblies nearly identical to assemblies from unprocessed data. We then assemble two large soil metagenomes from matched Iowa corn and native prairie soils. The predicted functional content and phylogenetic origin of the assembled contigs indicate significant taxonomic differences despite similar function. The assembly strategies presented are generic and can be extended to any metagenome; full source code is freely available under a BSD license.

    Comment: Includes supporting information
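
Digital normalization, one of the two filtering approaches mentioned, can be sketched in a few lines: a read is kept only if the median abundance of its k-mers among the reads kept so far is below a coverage cutoff, so redundant reads are discarded before assembly. This toy version (tiny k, exact in-memory counts rather than the probabilistic counting used in practice, made-up reads) is illustrative only.

```python
from collections import Counter
import statistics

def kmers(read, k=4):
    """All overlapping k-mers of a read."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def normalize(reads, k=4, cutoff=2):
    """Keep a read only while its median k-mer coverage is below cutoff."""
    counts, kept = Counter(), []
    for read in reads:
        km = kmers(read, k)
        if statistics.median(counts[x] for x in km) < cutoff:
            kept.append(read)
            counts.update(km)
    return kept

reads = ["ACGTACGT"] * 10 + ["TTTTAAAA"]
kept = normalize(reads)
# the ten identical reads collapse to a couple of representatives,
# while the rare read survives untouched
```

Because the decision is streaming and per-read, the memory and time cost scale with the distinct k-mers kept, not with the raw data volume, which is why this kind of filter makes very large assemblies tractable.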

    Flexible taxonomic assignment of ambiguous sequencing reads

    Background: To characterize the diversity of bacterial populations in metagenomic studies, sequencing reads need to be accurately assigned to taxonomic units in a given reference taxonomy. Reads that cannot be reliably assigned to a unique leaf in the taxonomy (ambiguous reads) are typically assigned to the lowest common ancestor of the set of species that match them. This introduces a potentially severe false-positive error in the estimation of the bacteria present in the sample, since every species in the subtree rooted at the ancestor is implicitly assigned to the read even though many of them may not match it.

    Results: We present a method that maps each read to a node in the taxonomy that minimizes a penalty score, balancing the relevance of precision and recall in the assignment through a parameter q. This mapping can be obtained in time linear in the number of matching sequences, because LCA queries to the reference taxonomy take constant time. When applied to six different metagenomic datasets, our algorithm produces different taxonomic distributions depending on whether coverage or precision is maximized. Including information on the quality of the reads reduces the number of unassigned reads but increases the number of ambiguous reads, underscoring the relevance of our method. Finally, two measures of performance are described, and results on a set of artificially generated datasets are discussed.

    Conclusions: The assignment strategy for sequencing reads introduced in this paper is a versatile and quick method for studying bacterial communities. The bacterial composition of the analyzed samples can vary significantly depending on how ambiguous reads are assigned, that is, on the value of the q parameter. Validation of our results on an artificial dataset confirms that a combination of values of q produces the most accurate results.
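
A minimal sketch of the penalty-minimizing idea: each candidate node is charged q per implied false-positive leaf (a species under the node that the read does not match) and 1 - q per excluded matching leaf, and the read is assigned to the cheapest node. The toy taxonomy and the exact form of the score are assumptions for illustration, not the paper's precise definition, and the brute-force scan stands in for the linear-time LCA-based computation.

```python
# toy taxonomy: node -> children (leaves have no entry)
taxonomy = {
    "root": ["bact", "arch"],
    "bact": ["b1", "b2", "b3"],
    "arch": ["a1"],
}

def leaves_under(node):
    """Set of leaf taxa in the subtree rooted at node."""
    kids = taxonomy.get(node, [])
    if not kids:
        return {node}
    return set().union(*(leaves_under(c) for c in kids))

def assign(matches, q=0.5):
    """Assign a read to the node minimizing a q-weighted penalty."""
    best = None
    for node in ["root", "bact", "arch", "b1", "b2", "b3", "a1"]:
        under = leaves_under(node)
        fp = len(under - matches)        # species implied but not matched
        fn = len(matches - under)        # matched species left out
        score = q * fp + (1 - q) * fn
        if best is None or score < best[0]:
            best = (score, node)
    return best[1]

# a read matching b1 and b2: the plain LCA is "bact", which drags in b3
assign({"b1", "b2"}, q=0.9)   # precision-heavy: settles on a single leaf
assign({"b1", "b2"}, q=0.1)   # recall-heavy: accepts the LCA "bact"
```

Sliding q between 0 and 1 interpolates between leaf-level precision and LCA-level recall, which is exactly the trade-off the abstract says drives the differing taxonomic distributions.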